Mercedes to accept legal responsibility for accidents involving self-driving cars
Mercedes has announced that it will take legal responsibility for any crashes that occur while its self-driving systems are engaged. The company is currently deploying its "Drive Pilot" technology on its new S-Class and EQS saloon models, which qualifies as "Level 3" autonomy on the six-tier scale devised by the Society of Automotive Engineers, ranging from Level 0 (no automated driver assistance) to Level 5 (the car drives itself everywhere without any input from the vehicle occupants).

Level 3 autonomy means that drivers may take their hands off the wheel and undertake other tasks, such as reading a book, while the car assumes full control of all driving functions. However, this applies only in specific conditions, such as low-speed traffic on motorways, and the person in the driver's seat must be able to retake control within a few seconds of an alert from the car. This is a big leap from Level 2 autonomy, which requires hands-on-wheel supervision from the driver at all times and is already commonplace on new cars in the form of adaptive cruise control and automated lane-keeping. Some cars from the likes of Audi, Mercedes, BMW, Genesis and Tesla have systems so advanced that they are considered to sit somewhere between Levels 2 and 3, dubbed by experts "Level 2+".
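To make the six-level taxonomy concrete, here is a minimal Python sketch of the SAE levels and the supervision rule the article describes. It is an illustration only: the enum and function names are illustrative rather than official SAE J3016 terminology.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels described above (0-5)."""
    NO_AUTOMATION = 0           # no automated driver assistance
    DRIVER_ASSISTANCE = 1       # a single assist feature, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # adaptive cruise + lane-keeping; hands on wheel
    CONDITIONAL_AUTOMATION = 3  # hands-off in limited conditions (Drive Pilot)
    HIGH_AUTOMATION = 4         # no driver needed within a defined domain
    FULL_AUTOMATION = 5         # drives itself everywhere, no occupant input

def driver_must_supervise(level: SAELevel) -> bool:
    """Levels 0-2 require constant hands-on supervision. From Level 3 up,
    the system drives, though Level 3 still expects the person in the
    driver's seat to retake control within seconds of an alert."""
    return level <= SAELevel.PARTIAL_AUTOMATION

# The leap the article describes: Level 2 needs supervision, Level 3 does not.
assert driver_must_supervise(SAELevel.PARTIAL_AUTOMATION)
assert not driver_must_supervise(SAELevel.CONDITIONAL_AUTOMATION)
```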
Descriptive AI Ethics: Collecting and Understanding the Public Opinion
As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how current responsibility ascription practices might be adapted to meet such new technologies [19, 33]. A critical viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap [27]. These systems' autonomy challenges human control over them [13], while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent [40], scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37]: for example, they might "have a body to kick," yet they "have no soul to damn" [4].
Who's to blame when a machine botches your surgery?
Medicine is an imprecise art, and medical error, whether through negligence or honest mistake, is shockingly common. Some experts believe it to be the third-biggest killer in the US. In the UK, as many as one in six patients receive an incorrect diagnosis from the National Health Service. One of the great promises of artificial intelligence is to drastically reduce the number of mistakes made in the world of health care. For some conditions, the technology is already approaching, and in some cases matching or even exceeding, the success rates of the best specialists. Researchers at the John Radcliffe Hospital in Oxford, for instance, claim to have developed an AI system capable of outperforming cardiologists in identifying heart-attack risk by examining chest scans.